content: source track: update source threats for draft spec #1236

Open · wants to merge 6 commits into main from users/zacox/orgthreat

Conversation

@zachariahcox (Contributor) commented Nov 18, 2024

fixes: #1178

The threats.md file is intended to be a one-stop summary of all the threats mitigated by SLSA.
From this page, we should plan to redirect to other parts of the documentation.

netlify bot commented Nov 18, 2024

Deploy Preview for slsa ready!

- Latest commit: 4a19699
- Latest deploy log: https://app.netlify.com/sites/slsa/deploys/67c620819fb62600089818ca
- Deploy Preview: https://deploy-preview-1236--slsa.netlify.app


Copilot reviewed 1 out of 1 changed files in this pull request and generated no suggestions.

Comments skipped due to low confidence (2)

docs/spec/draft/threats.md:86

  • [nitpick] The word 'SHOULD' should not be in all caps unless it is a specific requirement in a specification. Consider changing it to 'should'.
Trustworthiness scales with transparency, and consumers SHOULD push on their vendors to follow transparency best-practices.

docs/spec/draft/threats.md:86

  • [nitpick] The phrase 'Trustworthiness scales with transparency' is unclear. Consider rephrasing it to 'Trustworthiness increases with transparency'.
Trustworthiness scales with transparency, and consumers SHOULD push on their vendors to follow transparency best-practices.


@TomHennen (Contributor) left a comment:

Just the one comment, otherwise this looks great!

@adityasaky (Member) left a comment:

this isn't a complete review because I wasn't sure if more changes are coming, so I'm submitting this for visibility for now.

<details><summary>Software producer intentionally submits bad code</summary>
*Mitigation:*
This kind of attack cannot be directly mitigated through SLSA controls.
Tools like the [OSSF Scorecard](https://github.com/ossf/scorecard) can help to quantify the risk of consuming artifacts from specific organizations, but do not fully remove it.

Member:

Suggested change
Tools like the [OSSF Scorecard](https://github.com/ossf/scorecard) can help to quantify the risk of consuming artifacts from specific organizations, but do not fully remove it.
Tools like the [OSSF Scorecard](https://github.com/ossf/scorecard) can help to quantify the risk of consuming artifacts from specific producers, but do not fully remove it.

Also, can we point to some specific ways to use scorecard here? I think we want to be careful with what we say can help with something like the xz attack...
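
For concreteness, one way to act on "use scorecard" is to query the public Scorecard API and look at the checks that speak to producer behavior rather than build hygiene. The sketch below is a minimal illustration, assuming the api.securityscorecards.dev endpoint and the check names shown (verify both against current Scorecard docs); it is not part of the proposed spec text.

```python
# Minimal sketch: query the public OSSF Scorecard API for a repository and
# surface the checks most relevant to "can I trust this producer?".
# Assumes the api.securityscorecards.dev endpoint and the check names below;
# verify them against the current Scorecard documentation before relying on them.
import json
import urllib.request


def scorecard_summary(repo: str) -> dict:
    """Fetch Scorecard results for e.g. 'github.com/ossf/scorecard'."""
    url = f"https://api.securityscorecards.dev/projects/{repo}"
    with urllib.request.urlopen(url) as resp:
        data = json.load(resp)
    # Checks that speak to producer risk rather than build/CI hygiene.
    interesting = {"Code-Review", "Maintained", "Contributors"}
    return {
        "aggregate_score": data.get("score"),
        "producer_checks": {
            c["name"]: c.get("score")
            for c in data.get("checks", [])
            if c["name"] in interesting
        },
    }


if __name__ == "__main__":
    print(scorecard_summary("github.com/ossf/scorecard"))
```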

*Mitigation:*
This kind of attack cannot be directly mitigated through SLSA controls.
Tools like the [OSSF Scorecard](https://github.com/ossf/scorecard) can help to quantify the risk of consuming artifacts from specific organizations, but do not fully remove it.
Trustworthiness scales with transparency, and consumers SHOULD push their vendors to follow transparency best-practices.

Member:

"Transparency" is unfortunately an overloaded word in our world. WDYM specifically in this context?

Contributor Author:

oof, that's a good callout.

I propose that I mean:

  • open source
  • open build
  • open policies
  • published "definitions of correctness."

Contributor:

I'd be inclined to remove the sentence altogether. Yes, transparency (visibility into the process that creates the software) is one way of increasing trustworthiness, but there are other vectors too. "Does this organization have something to lose if they do a bad thing? Do I think that's enough of an incentive for them not to do the bad thing?"

Additionally, I don't think "Transparency best-practices" are well defined anywhere (or even vaguely agreed upon...).

Perhaps the easiest way forward is

  1. to focus on the submission of bad source (so open builds don't apply here) and assume the other parts of the chain are protected (at the very least they're covered by other threats on this page).
  2. to be a bit more hand-wavy since this is a squishy subject:
**Mitigation**: Users must establish some basis to trust the organization they are consuming software from. That basis may be 1) that the code is open sourced **and** has a sufficiently large user base that makes it more likely for malicious changes to be detected, or 2) that the organization has sufficient incentives (legal, reputational, etc.) to dissuade it from making malicious changes. Ultimately this is a judgement call with no straightforward answer.

@zachariahcox (Contributor Author) commented Dec 4, 2024:

@TomHennen I think SLSA tooling gives you a way to unilaterally improve trust when the source and builds are open source. This is possible independently of whether or not the producer has something to lose.

I agree though that this section needs to be reduced.

Contributor Author:

I left two suggestions. wdyt @trishankatdatadog @TomHennen ?

Member:

Left a comment down there...

Contributor Author:

good ideas. I'll incorporate these.


@TomHennen (Contributor) commented Dec 4, 2024:

If we merge this before 1.1 goes out will that make it harder to remove this from the 1.1 release? Have we decided we want to include this in the 1.1 release (fine with me at this point)?

@lehors thoughts?

@zachariahcox changed the title from "content: source track: address org threats" to "content: source track: address provider threats" on Dec 4, 2024
be nice to resolve. For example, compromised developer credentials - is that (A)
or (B)?
-->
*Threat:* A producer intentionally creates a malicious revision with the intent of harming their consumers.

Contributor Author:

@TomHennen what if we update this to cover both:

Suggested change
*Threat:* A producer intentionally creates a malicious revision with the intent of harming their consumers.
*Threat:* A producer intentionally creates a malicious revision (or a VSA issuer intentionally creates a malicious attestation) with the intent of harming their consumers.

The mitigation for both is clearer -- you cannot.

Comment on lines 71 to 78
Trustworthiness scales with transparency, and consumers SHOULD push their vendors to follow transparency best-practices.
When transparency is not possible, consumers may choose not to consume the artifact, or may require additional evidence of correctness from a trusted third-party.
Tools like the [OSSF Scorecard](https://github.com/ossf/scorecard) can help to quantify the risk of consuming artifacts from specific producers, but do not fully remove it.
For example, a consumer may choose to only consume source artifacts from repositories that have a high score on the OSSF Scorecard.

Contributor Author:

Suggested change
Trustworthiness scales with transparency, and consumers SHOULD push their vendors to follow transparency best-practices.
When transparency is not possible, consumers may choose not to consume the artifact, or may require additional evidence of correctness from a trusted third-party.
Tools like the [OSSF Scorecard](https://github.com/ossf/scorecard) can help to quantify the risk of consuming artifacts from specific producers, but do not fully remove it.
For example, a consumer may choose to only consume source artifacts from repositories that have a high score on the OSSF Scorecard.
Ultimately, consumers must decide which producers are trusted to produce artifacts and which issuers are trusted to produce VSAs.

We could redirect from here over to "verifying platforms".
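
To illustrate what that consumer decision could look like, here is a hypothetical policy sketch: accept a source revision only if its repository clears a Scorecard threshold and its VSA names an issuer on the consumer's allow-list. The issuer URL, the threshold, and the `verifier.id` field access are illustrative assumptions, not normative spec text.

```python
# Hypothetical consumer-side policy sketch for the suggestion above.
# TRUSTED_VSA_ISSUERS and MIN_SCORECARD_SCORE are policy knobs chosen by the
# consumer; the VSA field layout (verifier -> id) is illustrative only.

TRUSTED_VSA_ISSUERS = {"https://example.com/source-vsa-issuer"}  # assumed issuer ID
MIN_SCORECARD_SCORE = 7.0  # threshold chosen by the consumer, not by SLSA


def accept_source(repo_score: float, vsa_predicate: dict) -> bool:
    """Return True only if both legs of the (illustrative) policy pass."""
    issuer = vsa_predicate.get("verifier", {}).get("id")
    return repo_score >= MIN_SCORECARD_SCORE and issuer in TRUSTED_VSA_ISSUERS


# A repo scoring 8.2 whose VSA names a trusted issuer is accepted.
print(accept_source(8.2, {"verifier": {"id": "https://example.com/source-vsa-issuer"}}))
# A repo scoring 4.0 is rejected, even with a trusted issuer.
print(accept_source(4.0, {"verifier": {"id": "https://example.com/source-vsa-issuer"}}))
```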

Member:

Sorry for the late reply. Why do consumers even need to know which producers are trusted to produce artifacts (source code, I imagine, in this case)? Couldn't they just get away with the Source Track VSA from the issuer and call it a day, so long as its subjects/outputs (e.g., source code) match the inputs to the next step (e.g., build)?
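
A rough sketch of the check described above: the consumer (or build platform) only needs to confirm that a digest in the Source Track VSA's subject equals the source digest the build consumed. The statement shape follows the in-toto attestation layout (subject name plus digest set); treat the exact field names as assumptions to verify against the spec.

```python
# Sketch of the point above: rely on the Source Track VSA alone by checking
# that the VSA's subject digest equals the source digest the build consumed.
# The statement/subject layout follows the in-toto attestation framing; the
# exact field names are an assumption to verify against the spec.

def vsa_covers_build_input(vsa_statement: dict, build_source_digest: str) -> bool:
    """True if any VSA subject digest matches the digest fed to the build."""
    for subject in vsa_statement.get("subject", []):
        if build_source_digest in subject.get("digest", {}).values():
            return True
    return False


statement = {"subject": [{"name": "git+https://example.com/repo",
                          "digest": {"gitCommit": "abc123"}}]}
print(vsa_covers_build_input(statement, "abc123"))  # True: digests line up
print(vsa_covers_build_input(statement, "def456"))  # False: build used something else
```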

Comment on lines +26 to +28
**Source integrity:** Ensure that source revisions contain only changes submitted by
authorized contributors according to the process defined by the software producer and
that source revisions are not modified as they pass between development stages.

Contributor Author:

this wording aligns better with the build integrity section below.

Contributor:

I think this is 'fine', but we do lose the introduction of the concept of 'intent' (which we reference later) which isn't quite the same as 'process'. I'd argue the process is one of the ways the producer tries to ensure their intent is tracked, but it's not really what the end user cares about.

I'd also probably argue that the build process is somewhat different and they don't need to mirror each other. To some extent the source is the closest approximation of the org's intent, and then we just rely on the build system to mechanically transform that source faithfully.

@zachariahcox changed the title from "content: source track: address provider threats" to "content: source track: update source threats for 1.1" on Dec 4, 2024
@@ -267,8 +225,10 @@ does not accept this because the version X is not considered reviewed.

*Threat:* Two trusted persons collude to author and approve a bad change.

*Mitigation:* This threat is not currently addressed by SLSA. We use "two
trusted persons" as a proxy for "intent of the software producer".

Contributor Author:

I do not think we should use "two trusted persons" as a proxy for producer intent. We should probably say "the documented change process for the source" if we must, but I prefer just "the intent".

@zachariahcox force-pushed the users/zacox/orgthreat branch from 421ff2b to 97094b5 on December 4, 2024 at 22:27

@zachariahcox (Contributor Author) commented Dec 4, 2024:

> If we merge this before 1.1 goes out will that make it harder to remove this from the 1.1 release? Have we decided we want to include this in the 1.1 release (fine with me at this point)?
>
> @lehors thoughts?

@TomHennen! Potentially I am off base -- is 1.1 going to be different from the /draft folder?
I think this kind of change should go out with the most recent version of the source track, whichever that is 👍
It was not my intention to add items to the 1.1 release if that is different than the drafts folder.
In the meantime I'll update the title to talk about the draft spec instead of 1.1.

@zachariahcox changed the title from "content: source track: update source threats for 1.1" to "content: source track: update source threats for draft spec" on Dec 4, 2024

@lehors (Member) commented Dec 5, 2024:

> If we merge this before 1.1 goes out will that make it harder to remove this from the 1.1 release? Have we decided we want to include this in the 1.1 release (fine with me at this point)?
>
> @lehors thoughts?
>
> @TomHennen! Potentially I am off base -- is 1.1 going to be different from the /draft folder? I think this kind of change should go out with the most recent version of the source track, whichever that is 👍 It was not my intention to add items to the 1.1 release if that is different than the drafts folder. In the meantime I'll update the title to talk about drafts spec instead of 1.1

I'm glad @TomHennen remembers that we need to get 1.1 finalized. I was going to bring that up myself looking at this PR. Unfortunately, we don't have a very clean way of working on several versions in parallel.

The plan was to only work in the draft folder and when all 1.1 issues have been addressed to create a new folder for 1.1 selecting (manually) from the draft what is relevant. Of course this is really only practical if the differences are fairly coarse (like whole files to be left out). If the content of some files start changing in ways that are only relevant for the next version this operation becomes quite impractical.

If we want to go ahead and merge this, I will need to create the 1.1 folder before, and from then on every change made for 1.1 will have to be made in both folders (1.1 and draft). Not ideal but doable.

@TomHennen (Contributor):

> If we merge this before 1.1 goes out will that make it harder to remove this from the 1.1 release? Have we decided we want to include this in the 1.1 release (fine with me at this point)?
>
> @lehors thoughts?
>
> @TomHennen! Potentially I am off base -- is 1.1 going to be different from the /draft folder? I think this kind of change should go out with the most recent version of the source track, whichever that is 👍 It was not my intention to add items to the 1.1 release if that is different than the drafts folder. In the meantime I'll update the title to talk about drafts spec instead of 1.1
>
> I'm glad @TomHennen remembers that we need to get 1.1 finalized. I was going to bring that up myself looking at this PR. Unfortunately, we don't have a very clean way of working on several versions in parallel.
>
> The plan was to only work in the draft folder and when all 1.1 issues have been addressed to create a new folder for 1.1 selecting (manually) from the draft what is relevant. Of course this is really only practical if the differences are fairly coarse (like whole files to be left out). If the content of some files start changing in ways that are only relevant for the next version this operation becomes quite impractical.
>
> If we want to go ahead and merge this, I will need to create the 1.1 folder before, and from then on every change made for 1.1 will have to be made in both folders (1.1 and draft). Not ideal but doable.

I wonder if at this point we can just wait and include the source track (and the build track if it's ready). We've waited this long, and there don't seem to be that many outstanding issues elsewhere. By trying to separate them now we'd probably just cause ourselves more work for less gain. WDYT?

@lehors (Member) commented Dec 6, 2024:

> I wonder if at this point we can just wait and include the source track (and the build track if it's ready). We've waited this long, and there don't seem to be that many outstanding issues elsewhere. By trying to separate them now we'd probably just cause ourselves more work for less gain. WDYT?

You mean to skip 1.1? That would solve the problem of course. It's true that the more time goes by the less relevant it is.

@TomHennen (Contributor):

> I wonder if at this point we can just wait and include the source track (and the build track if it's ready). We've waited this long, and there don't seem to be that many outstanding issues elsewhere. By trying to separate them now we'd probably just cause ourselves more work for less gain. WDYT?
>
> You mean to skip 1.1? That would solve the problem of course. It's true that the more time goes by the less relevant it is.

Not so much "skip" as include the source track in 1.1. New tracks are supposed to go in minor releases.

@TomHennen (Contributor):

We had a long, long chat about tracks, threats pages, etc. at yesterday's meeting. Do we still like this PR, or do we want to simplify it some? I have no opinion at this time.

@zachariahcox force-pushed the users/zacox/orgthreat branch from 97094b5 to 433469c on March 3, 2025 at 18:22
@@ -54,3 +54,14 @@ the basis for a future Build Platform Operations track:
</section>

[draft version (v0.1)]: ../v0.1/requirements.md


## Source Track

Contributor Author:

added the "objective" section from source-requirements.md, updated the verbs to be future tense.
this is to be the target of future-direction links.

the fork was reviewed.
colleague (who is not trusted by MyPackage), then proposes the change to
the upstream repo.
Solution: The proposed change still requires two-person review in the upstream context even though it received two-person review in another context.

Contributor Author:

I updated the words to be a little more precise.
In general, I'd expect reviews to be transitive for some situations, but yeah, if your org thinks this is a problem, then this is how you'd fix it.

@zachariahcox (Contributor Author):

Howdy friends! I have updated this PR to reference only 1.1 legal concepts.
I also incorporated the most recent feedback from reviewers (thank you!)

There were a lot of structural changes to the repo and I ultimately rebased these smallish patches.
Apologies if your comments fell off! I think I got to all of them but please let me know.

cc: @TomHennen @adityasaky @lehors @trishankatdatadog @arewm


Copilot encountered an error and was unable to review this pull request. You can try again by re-requesting a review.

@TomHennen (Contributor) left a comment:

Looking good!

*Example:* Adversary creates a pull request using a secondary account and approves it using their primary account.

Solution: The producer must require strongly authenticated user accounts and ensure that all accounts map to unique persons.
A common vector for this attack is to take over a robot account with the permission to contribute code.

Contributor:

optional: Robot accounts are discussed below so this seems somewhat redundant. Suggest removing.

Contributor Author:

That's probably fine. I think these are different threat models, but I'm not sure if the difference is worth calling out, or if these two blurbs are doing so.
This one is about gaining access to two accounts with write access, and the robots one is about gaining access to an account with bypass.

The mitigation for the first is hard to execute and protect against. You'd have to somehow prevent account takeovers.

The mitigation for the robots one is easier to execute (just run some kind of automation with a poisoned payload, usually) and protect against -- just do not allow any account to bypass code review requirements.
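
As a concrete, hedged illustration of "do not allow any account to bypass code review requirements": a producer hosting on GitHub could express this with branch protection, leaving the bypass allowance lists empty so neither humans nor robot accounts can merge unreviewed changes. OWNER, REPO, and TOKEN below are placeholders, and the field names should be checked against GitHub's current REST API documentation.

```python
# Hedged sketch of the "no bypass" mitigation described above: configure branch
# protection so that no account, human or robot, may skip review. Uses GitHub's
# REST "update branch protection" endpoint; confirm field names against the
# current API docs. OWNER/REPO/TOKEN are placeholders, not real values.
import json
import urllib.request

OWNER, REPO, BRANCH = "OWNER", "REPO", "main"   # placeholders
TOKEN = "ghp_..."                               # placeholder credential

payload = {
    "required_pull_request_reviews": {
        "required_approving_review_count": 2,
        # Empty allowances: no users, teams, or apps may skip review.
        "bypass_pull_request_allowances": {"users": [], "teams": [], "apps": []},
    },
    "enforce_admins": True,        # admins are held to the same rule
    "required_status_checks": None,
    "restrictions": None,
}

req = urllib.request.Request(
    f"https://api.github.com/repos/{OWNER}/{REPO}/branches/{BRANCH}/protection",
    data=json.dumps(payload).encode(),
    method="PUT",
    headers={"Authorization": f"Bearer {TOKEN}",
             "Accept": "application/vnd.github+json"},
)
# urllib.request.urlopen(req)  # left commented out: this call mutates repo settings
```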

Contributor:

Right. I guess I'll leave it to you?

review for these changes.
robot, which is allowed to submit without review.
Adversary compromises the robot and submits a malicious change.
Solution: Require review for such changes.

Contributor:

Optional:

Suggested change
Solution: Require review for such changes.
Solution: Require review for such changes or ensure all changes relevant to the robot's behavior require review.

E.g. imagine if the Robot is a GHA that generates code. If the workflow that runs that code is reviewed, and the generation tool is reviewed, then it could be safe?
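
To make "ensure all changes relevant to the robot's behavior require review" concrete, here is a hypothetical policy sketch: the robot may merge without review only when its change stays inside its generated-output area, while its workflow and generator tool remain under normal review. All paths and names are invented for illustration and are not from the spec.

```python
# Hypothetical policy sketch for the suggestion above: a robot may submit
# without review only if every file that shapes its behavior (the workflow
# that runs it and the generator tool itself) stays under normal review,
# and the robot's own change stays inside its generated-output area.
# All paths below are illustrative, not taken from the SLSA spec.

REVIEW_REQUIRED_PREFIXES = (".github/workflows/", "tools/codegen/")  # robot "behavior"
ROBOT_OUTPUT_PREFIX = "generated/"                                   # robot "output"


def robot_change_is_acceptable(changed_files: list[str]) -> bool:
    """True if an unreviewed robot submission touches only its output area."""
    return all(
        path.startswith(ROBOT_OUTPUT_PREFIX)
        and not path.startswith(REVIEW_REQUIRED_PREFIXES)
        for path in changed_files
    )


print(robot_change_is_acceptable(["generated/api_client.go"]))        # True
print(robot_change_is_acceptable([".github/workflows/codegen.yml"]))  # False: behavior change needs review
```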

Contributor Author:

(commented on the other thread)

Contributor:

I suppose I think there are legitimate use cases for bypassing code review by robots, and I think it is possible to do safely?

I also don't think we have to address that here and now, especially if we don't have a fully fleshed out framework for doing so.


zachariahcox and others added 4 commits on March 3, 2025 at 13:23, each co-authored by Tom Hennen <[email protected]> and signed off by Zachariah Cox <[email protected]>.
Successfully merging this pull request may close these issues.

TODO: Need mitigation description for "Software producer intentionally submits bad code" threat
6 participants